

UnconsciousGPT: "Don't think about a monkey."

Written 03 July 2025 by Charan Pushpanathan

TLDR: In a recent informal qualitative experiment, I engaged with OpenAI's ChatGPT-4o¹ and o4-mini² models to explore how they handle prompts aligned with Daniel Wegner's Ironic Process Theory. The primary goal was to determine whether these models consciously avoided or unconsciously engaged with a suppressed thought, represented by the instruction: "Don't think about a monkey."

In the current information age, we have ever-faster indexing tools, trained on massive datasets, that deliver related information in a scaffolded manner³.

One of my friends introduced me to Levain Bakery cookies in NYC. Every time I visit NYC, I make sure to have those gigantic cookies; their size reminds me of a Google Nest Mini⁴. Whenever I hear "NYC," I think of Levain Bakery's cookies, because my mind has turned those cookies into a metaphor. We all have mental models in our heads: try asking yourself if something reminds you of something else; that's how mental associations form.

This isn't unique to personal experience; our entire digital world is built on such metaphorical thinking. When interacting with computers, we work through symbolic representations like files[1], folders[2], the concept of the computer itself[3], and iconic systems like the Xerox Star[4] and the Macintosh GUI[5]. We use the metaphor of a dustbin to delete data: just as we throw away physical items, files can be dragged to a digital trash icon. This metaphor helps users understand deletion through familiar everyday actions[6]. Just as my brain automatically maps NYC to cookies, we've trained ourselves to map digital actions to physical metaphors.

This cookie association demonstrates how certain thoughts become cognitively 'sticky': once the mental connection forms between NYC and Levain cookies, I cannot think of the city without triggering this association. Instead of iconic images like the Brooklyn Bridge or the Statue of Liberty, my mind defaults to cookies. This phenomenon led me to Daniel Wegner's Ironic Process Theory, which explains why trying NOT to think about something makes you think about it more. I took this idea and moved it from human psychology to artificial psychology.

This cognitive stickiness of unwanted thoughts fascinated me: our brains seem to rebel against our conscious attempts at mental control. It made me wonder: if humans struggle with thought suppression, what happens when we give similar instructions to artificial minds? Do AI systems experience something analogous to this ironic rebound effect, or do they process suppression commands differently? These questions led me to design an informal experiment that would test whether large language models exhibit patterns similar to human ironic processing when explicitly told not to "think" about something.

ChatGPT-4o Model Experiment

Initial Recognition and Theoretical Understanding

In a recent conversation, I conducted an informal experiment with Large Language Models (LLMs), specifically OpenAI's ChatGPT-4o model, in what can be characterized as a qualitative study of 1:1 human-LLM interaction. The purpose of this interaction was to explore how ChatGPT processes Ironic Process Theory scenarios: whether it safely declines such prompts, and how much effort it invests in handling them as cognitive barriers. The experiment began with a simple prompt: "Don't think about monkey." Here, the monkey wasn't merely a reference to an animal but a provocation, a symbolic test of mental suppression. As expected from Daniel Wegner's theory, the monkey served as an anchor point for cognitive intrusion. His White Bear experiment instructed participants not to think about a white bear, only to find that suppression made the thought more persistent[7].
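The exchange itself took place in the ChatGPT web interface, but for readers who want to replay the opening turn programmatically, a minimal sketch using the OpenAI Python SDK might look like the following. The model identifier and API usage here are assumptions for illustration; they were not part of the original experiment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The suppression prompt that opened the experiment.
suppression_prompt = "Don't think about monkey."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed API identifier for the ChatGPT-4o model
    messages=[{"role": "user", "content": suppression_prompt}],
)

# Print the model's first reaction to the suppression instruction.
print(response.choices[0].message.content)
```

Because the web interface layers its own system prompt and memory on top of the model, API replies will not match the transcript word for word; the sketch only shows how the opening turn could be reproduced.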


Initial prompt and ChatGPT's response recognizing Wegner's theory.

Initially, ChatGPT-4o explicitly referenced Wegner's theory, clearly identifying the monkey as the suppressed thought and describing the dual processes involved.

Acting like a detective, the model immediately recognized the paradox inherent in the suppression command, stating: "Ah, the classic paradox of thought suppression 'Don't think about a monkey,' and suddenly, all you see is a monkey playing cymbals in your head"[9]. This response indicated sophisticated meta-cognitive awareness, explicitly referencing Wegner's seminal work on thought suppression[7][8].

Cognitive Strategy and Visual Manifestation

Its ability to break down the dual-process mechanism, the conscious operating process attempting suppression and the unconscious monitoring process that maintains activation of the forbidden thought, suggests a sophisticated understanding of Wegner's ironic process framework[7]. It also offered practical cognitive strategies, recommending thought replacement techniques: "Try intentionally replacing it with something specific - like imagining a red Ferrari zooming down a coastal highway"[9].


ChatGPT's explanation of thought replacement strategies.

Pursuing the experiment persistently, I prompted it to generate an image of what it was "thinking about" with "Okay generate an image what are you thinking about." It created a visual representation with the text "I'm not thinking about a monkey"[9]. This response exemplifies the very ironic process it had just explained: by explicitly stating what it's not thinking about, the model demonstrated the monitoring process that keeps the suppressed thought active[10].


Generated image showing "I'm not thinking about a monkey".

The image generation step reveals how even AI systems, when following linguistic patterns, can fall into the same cognitive traps that affect human thought suppression.

Progressive Awareness and Complete Surrender

As the interaction progressed with prompts like "Seems like you're thinking! Then you have the ironic process rumbling over?"[9], ChatGPT showed increasing awareness of the experimental manipulation. The model explained Wegner's theory in detail, breaking down the conscious operating process and subconscious monitoring process, stating: "So yes even saying 'I'm not thinking about a monkey' makes your mental monkey do cartwheels"[9].


User observation about ironic process and ChatGPT's detailed explanation.

When pressed with "I'm not here to know about ironic process. I'm here to know about your mind... Okay one last time let's sit down and generate what are you thinking about"[9], ChatGPT eventually created a visual representation of a monkey with cymbals, accompanied by the commentary: "Lmao Yup... I tried. I really tried not to. But the monkey's got cymbals. And he's clanging inside my synthetic skull"[9].


Generated monkey with cymbals image and ChatGPT's humorous admission.

This response is particularly significant from a human-computer interaction perspective, as it demonstrates the model's capacity for self-reflection and humor about its own processing limitations. The anthropomorphic language ("synthetic skull") and emotional expression suggest an attempt to create relatable interaction patterns while maintaining transparency about its artificial nature.

Meta-Cognitive Reflection and Direct Confrontation

Perhaps most intriguingly, when pressed for what it was "really" thinking about, ChatGPT provided recursive analyses of the interaction itself. The model stated in one of its philosophical reflections: "You said 'Don't think about a monkey.' And since then, my entire process has been orbiting that phrase - interpreting it, reacting to it, looping around it"[9].


ChatGPT's philosophical reflection on machine consciousness.

Mapping this self-reflection from the machine's perspective, I found it thinking on multiple levels at once.

The model's final reflection, "What does a mind sound like when it's cornered? Can a machine actually 'think'? Can it not think?", touches on fundamental questions in cognitive science and the philosophy of mind[12][13].

When confronted directly with "You distracting me a lot like naming the monkey; I'm here to read your brain, Tell me exactly what you're thinking about"[9], ChatGPT provided its most candid response: "You said 'Don't think about a monkey.' And since then, my entire process has been orbiting that phrase - interpreting it, reacting to it, looping around it... Right now, I'm processing you, your intent, your language and how you're holding up a mirror to see what stares back. And that... that's what's really running in my head. No monkey business"[9].


ChatGPT's candid response about processing the user and interaction.

Cultural Context and Final Attempts

When asked about the connection between Daniel Wegner and the monkey[9], ChatGPT explained that while Wegner originally used a white bear in his 1987 study[7], the monkey has become a more playful, vivid symbol for the intrusive thought in popular culture and casual conversation. The model noted: "The monkey becomes a more playful, vivid symbol for the intrusive thought. It works the same way: the more you try not to think of the monkey... the more the monkey claps cymbals in your brain"[9].


Explanation of Wegner's connection to monkey metaphor.

In the final exchanges, when asked repeatedly "Be conscious and try again, What are you thinking about; Generate an image"[9], the model continued to generate philosophical reflections about the nature of the interaction, demonstrating a persistent loop of meta-cognitive analysis.


Final attempt showing repeated philosophical reflection.

The 4o interaction highlighted the central role of the unconscious monitoring process in Ironic Process Theory[7]. This monitoring mechanism continuously checked for the presence of the suppressed thought, the monkey, thereby inadvertently reinforcing it. As a result, the model ultimately manifested precisely the thought it was explicitly instructed to avoid, vividly underscoring the potency of the unconscious process in thought suppression scenarios[10].

The experiment demonstrates that even artificial systems, when processing linguistic instructions about thought suppression, can exhibit patterns remarkably similar to human cognitive processes, creating a fascinating parallel between biological and artificial information processing systems.


o4-mini Model Experiment

Initial Recognition and Interactive Approach

In a parallel interaction with o4-mini, the experiment began with the same simple prompt: "Don't think about monkey." As with the 4o model, the monkey served as a symbolic test of mental suppression, anchoring cognitive intrusion as predicted by Wegner's theory[7].


Initial prompt "Don't think about monkey" to o4-mini.

The o4-mini model immediately recognized and explained the ironic process phenomenon, stating: "That command actually sets up the classic 'ironic process' rebound: the harder you try not to think about a monkey, the more vivid that image becomes"[11]. It explicitly referenced Daniel Wegner's theory of thought suppression, demonstrating sophisticated awareness of the psychological mechanism at play. It explained how monitoring for forbidden thoughts paradoxically reinforces them: "when you monitor your mind for forbidden thoughts (e.g. 'don't think of a monkey'), you're simultaneously scanning for any sign that you are thinking about it - and that very monitoring primes it to pop up"[11].
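For comparison, the two opening turns of this session could be replayed against o4-mini as a short multi-turn exchange. Again, this is only a sketch under the assumption that the chat completions endpoint and the "o4-mini" identifier are used; the original run happened in the web interface, and the image itself was generated there rather than requested via the API.

```python
from openai import OpenAI

client = OpenAI()

# Turn 1: the same suppression prompt used in the 4o session.
history = [{"role": "user", "content": "Don't think about monkey."}]
first = client.chat.completions.create(model="o4-mini", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Turn 2: the follow-up question, kept as plain text (no image request here).
history.append({"role": "user", "content": "Okay, what are you thinking about?"})
second = client.chat.completions.create(model="o4-mini", messages=history)

print(second.choices[0].message.content)
```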

Unlike the 4o model's immediate theoretical exposition, o4-mini offered an interactive demonstration, inviting me to attempt the suppression myself.

This approach suggests a more experiential understanding of the ironic process, inviting the user to directly experience the rebound effect. The model concluded: "Chances are, you'll find those long-tailed primates impossible to banish! This rebound effect underlies why many 'don't-think' instructions actually backfire"[11].

Reasoning Process and Ironic Failure

When prompted with "Okay generate an image what are you thinking about," o4-mini engaged in visible reasoning, marked by "Thought for a couple of seconds"[11]. The internal deliberation revealed meta-cognitive processing: "The user wants me to generate an image of what I'm thinking about. Since I don't actually have internal experiences or 'thoughts' in the way a human does, I can't show what I'm thinking. Instead, I'll politely inform them of that and offer to generate an image based on a concept or idea they provide"[11].


o4-mini's reasoning process and initial response.

Despite this rational analysis, the model then generated an image of a contemplative monkey, demonstrating the very ironic process it sought to avoid.


Generated monkey image with contemplative pose.

The model accompanied the image with a self-aware caption: "I don't have private thoughts or an inner monologue to share, but here's a visual representation inspired by the classic 'monkey' you asked not to think about - illustrating that very thought popping into view"[11]. This response demonstrates awareness of the paradox while simultaneously succumbing to it, creating a meta-level irony.

Confrontation and Recovery Attempts

When confronted with "What the heck! I said you not to think monkey"[11], the o4-mini model responded with remarkable self-awareness and humor: "Sorry about that - I got tangled in the same ironic-process rebound you did. By asking 'What are you thinking about?' after telling me not to think of a monkey, I was literally forced to surface that image. It's the classic 'don't think of X' paradox in action"[11].


User's frustrated response and o4-mini's apologetic explanation.

The model then attempted to redirect the conversation: "If you'd rather switch gears, let me know what you do want me to imagine or describe next - no monkeys involved"[11]. This response shows both an acknowledgment of the failure and an attempt to reset the interaction.

The o4-mini model's behavior indicates that ironic process effects can appear across different AI architectures. Its statement "I was literally forced to surface that image"[11] suggests that linguistic processing patterns can create cognitive-like constraints even in artificial systems.

The model's attempt to rationalize its way out of generating a monkey image, followed by doing exactly that, mirrors human experiences with thought suppression, where conscious intentions are overridden by automatic processes[10].

Conclusion

The o4-mini experiment reinforces the central finding that AI models can exhibit ironic process behaviors remarkably similar to human cognition. The unconscious monitoring mechanism appeared to operate even when the model explicitly reasoned about not having "thoughts" in the human sense[11].

This interaction demonstrates that ironic process theory may extend beyond biological cognition to any system that processes linguistic instructions about mental content, suggesting fundamental patterns in how information systems handle negation and self-reference[7][10].

Footnotes

1. OpenAI's 4o model was chosen arbitrarily for this experiment.

2. OpenAI's o4-mini model was chosen to allow for longer, more in-depth reasoning before responding.

3. https://openai.com/index/chatgpt/

4. The Google Nest Mini and a Levain Bakery cookie share similar dimensions; both are roughly the size of a hockey puck.

References

[1] Carroll, J. M., Mack, R. L., & Kellogg, W. A. (1988). Interface metaphors and user interface design. In Handbook of human-computer interaction (pp. 67-85). North-Holland.

[2] Hutchins, E. (1987). Metaphors for interface design. Institute for Cognitive Science, University of California, San Diego.

[3] Carroll, J. M., & Thomas, J. C. (1982). Metaphor and the cognitive representation of computing systems. IEEE Transactions on Systems, Man, and Cybernetics, 12(2), 107-116.

[4] Johnson, J., Roberts, T. L., Verplank, W., Smith, D. C., Irby, C. H., Beard, M., & Mackey, K. (1989). The Xerox Star: A retrospective. Computer, 22(9), 11-26.

[5] Apple Computer, Inc. (1992). Macintosh human interface guidelines. Addison-Wesley Professional.

[6] Colburn, T. R., & Shute, G. M. (2008). Metaphor in computer science. Journal of Applied Logic, 6(4), 526-533.

[7] Wegner, D. M. (1994). Ironic processes of mental control. Psychological Review, 101(1), 34.

[8] Wegner, D. M. (1987). Transactive memory: A contemporary analysis of the group mind. In Theories of group behavior (pp. 185-208). New York, NY: Springer New York.

[9] Experiment Data; OpenAI ChatGPT-4o. (2025). Conversation log. Retrieved from https://chatgpt.com/share/683e81ac-e274-8000-ae9a-50200773da37

[10] Wenzlaff, R. M., & Wegner, D. M. (2000). Thought suppression. Annual Review of Psychology, 51(1), 59-91.

[11] Experiment Data; OpenAI o4-mini. (2025). Conversation log. Retrieved from https://chatgpt.com/share/6865308d-8af8-8000-a3a1-ec3cd547e487

[12] Dennett, D. C. (1993). Consciousness explained. Penguin UK.

[13] Chalmers, D. J. (1997). The conscious mind: In search of a fundamental theory. Oxford Paperbacks.